Web Survey Bibliography
Relevance & Research Question: Online questionnaires are a standard tool for research companies. While survey software products offer many features, respondent fraud and indifferent or inattentive response behaviour remain critical issues. How can such low-quality responses be identified in an automated process?
Methods & Data: The author proposes a post-fieldwork approach based on behaviour pattern detection that does not rely on control or trap questions. Responses are classified by quality using 17 response-quality indicators together with discriminant analysis, logistic regression and an optional flag variable. The indicators focus on aspects such as the differentiation of answers to open-ended questions, the time spent answering the survey, and monotonous (straightlining) behaviour in matrix questions. For the procedure to work, the survey should include open-ended questions, several matrix questions and at least ten questions overall. An incentivized survey containing quality-related trap questions and other control measures was sent to a Facebook river sample (n = 134) and a commercial panel sample (n = 1,000); this survey is used to generate a standard classification. Five further survey data sets from past real-world projects are then used to examine the effectiveness of the procedure (157 <= n <= 2,603). R is used to calculate the indicators, SPSS for the discriminant and regression analyses. The automation process is designed specifically for QuestBack EFS software.
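To illustrate the kind of behaviour-pattern indicators described above, the following sketch computes three of them (straightlining in matrix questions, open-ended answer length, and completion-time speeding) and combines them into a binary flag. This is not the author's R/SPSS implementation: all function names, cut-off values and the toy data are hypothetical, and the simple rule-based flag stands in for the discriminant/logistic-regression classification actually used.

```python
# Hypothetical sketch of response-quality indicators; thresholds are invented.

def straightlining_share(matrix_answers):
    """Share of matrix questions answered with one repeated value (straightlining)."""
    if not matrix_answers:
        return 0.0
    flat = sum(1 for q in matrix_answers if len(set(q)) == 1)
    return flat / len(matrix_answers)

def open_ended_length(texts):
    """Mean character length of open-ended answers; very low values suggest low effort."""
    if not texts:
        return 0.0
    return sum(len(t.strip()) for t in texts) / len(texts)

def speeding_ratio(duration_sec, median_duration_sec):
    """Completion time relative to the sample median; low values indicate speeding."""
    return duration_sec / median_duration_sec

def low_quality_flag(resp, median_duration_sec):
    """Combine indicators with hypothetical cut-offs into a binary quality flag."""
    return (
        straightlining_share(resp["matrix"]) > 0.8
        and speeding_ratio(resp["duration"], median_duration_sec) < 0.4
    ) or open_ended_length(resp["open"]) < 2

respondents = [
    # Straightliner: identical matrix ratings, empty open answers, very fast.
    {"matrix": [[3, 3, 3, 3], [3, 3, 3, 3], [3, 3, 3]], "open": ["", "-"], "duration": 90},
    # Differentiated answers, substantive text, plausible duration.
    {"matrix": [[1, 4, 2, 5], [2, 3, 3, 1], [5, 2, 4]], "open": ["Useful product"], "duration": 380},
]
median_duration = 300
for i, r in enumerate(respondents):
    print(i, low_quality_flag(r, median_duration))
```

In the actual procedure, 17 such indicators feed a statistical classifier (discriminant analysis and logistic regression) rather than fixed thresholds; the sketch only shows the shape of the indicator stage.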
Results: Depending on the data set, the procedure identifies between 2.5 and 5.2 per cent of all respondents as low-quality. Their indicator values clearly point to poor response behaviour, so removing these cases from the sample should be considered.
Added Value: The approach offers a straightforward way to judge how trustworthy survey responses are relative to one another. This supports post-fieldwork data cleansing and reduces distortion caused by low-quality data. The procedure is ready for implementation in the EFS software.
Web survey bibliography - General Online Research Conference (GOR) 2015 (9)
- Higher response rates at the expense of validity? Consequences of the implementation of the ‘forced...; 2015; Decieux, J. P.; Mergener, A.; Neufang, K.; Sischka, P.
- Development and Validation of a Scale for Social Exhibitionism on the Internet (SEXI); 2015; Vetter, M.; Eib, C.; Hill-Kloss, S.; Wollscheid, P.; Hagemann, D.
- A quasi-experiment on effects of prepaid versus promised incentives on participation in a probability...; 2015; Schaurer, I.; Bosnjak, M.
- Online Eye-Tracking of Dynamic Advertising Content in (Mobile) Web-Surveys; 2015; Berger, S.
- Deep impact or no impact, evaluating opportunities for a new question type: Statement allocation on...; 2015; Schmidt, S.
- Approaches for Evaluating Online Survey Response Quality; 2015; Gluck, N.
- Coding Surveys on their Item Characteristics: Reliability Diagnostics; 2015; Bais, F.; Schouten, B.; Toepoel, V.
- Predicting Response Times in Web Surveys; 2015; Wenz, A.
- Positioning of Clarification Features in Open Frequency and Open Narrative Questions; 2015; Fuchs, M.; Metzler, A.